Create a basic example that outlines a basic deferred rendering approach #1633
base: master
Conversation
Thanks for your contribution. The added files need lots of explanatory comments: what do they do (at a high level), how, and why? Also, please add the JME copyright notice to the new files that lack it.
@MeFisto94 Are you still interested in this PR?
Yeah, sorry for the late reply; I've had it on my todo list but forgot about it multiple times. I guess you're more talking about the basics, though? Such as: we basically render everything into screen-sized off-screen buffers, and then later use a quad render (Filter) to compose the image and compute the lighting from the information present in said G-buffer. And that we do this mainly to reduce costly vertex shader invocations (especially with animated models and high light counts) [which would be less of a problem if we had a clever forward pipeline, which we lack], but research has shown that this opens up a lot of new opportunities as well.
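The two-pass structure described above can be sketched on the CPU. This is a toy illustration only, not jME code: all names (`DeferredSketch`, `compose`) are hypothetical, the "G-buffer" is just a pair of per-pixel arrays, and lighting is a single Lambert directional light. The point is that the compose pass iterates over screen pixels, not over scene geometry.

```java
// Toy CPU-side sketch of deferred rendering's two passes
// (hypothetical names, NOT the actual jMonkeyEngine API).
public class DeferredSketch {
    static final int W = 4, H = 4;   // a tiny "screen"

    /**
     * Lighting ("compose") pass: runs once per screen pixel over the
     * G-buffer contents, independent of how many geometries were drawn.
     * Returns the lit red channel per pixel (Lambert, directional light).
     */
    static float[] compose(float[][] normal, float[][] albedo, float[] lightDir) {
        float[] out = new float[W * H];
        for (int i = 0; i < W * H; i++) {
            float nDotL = Math.max(0,
                    normal[i][0] * lightDir[0]
                  + normal[i][1] * lightDir[1]
                  + normal[i][2] * lightDir[2]);
            out[i] = albedo[i][0] * nDotL;
        }
        return out;
    }

    public static void main(String[] args) {
        // Geometry-pass stand-in: fill the "G-buffer" with camera-facing
        // normals and mid-grey albedo for every pixel.
        float[][] normal = new float[W * H][];
        float[][] albedo = new float[W * H][];
        for (int i = 0; i < W * H; i++) {
            normal[i] = new float[]{0, 0, 1};
            albedo[i] = new float[]{0.5f, 0.5f, 0.5f};
        }
        float[] lit = compose(normal, albedo, new float[]{0, 0, 1});
        System.out.println(lit[0]);   // 0.5
    }
}
```

In the real pipeline the "arrays" are screen-sized render targets filled by cheap extraction shaders, and `compose` is a fragment shader driven by a full-screen quad.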
I thought deferred rendering still has basically the same vertex invocations, but it's the fragment invocations that are reduced? ...in the sense that the final render only does a full "pixel" render for things on the screen. ...I'm curious how deferred rendering would eliminate the need to at least render the shapes, though (i.e., the vertices).
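The distinction in the question above can be made concrete with rough invocation counts. All numbers here are illustrative assumptions, not measurements: vertex work is the same in both pipelines, but forward shading pays roughly lights × rasterized fragments (including overdraw), while deferred pays the cheap extraction once per fragment and then lights × screen pixels once.

```java
// Back-of-the-envelope shading-invocation counts (illustrative numbers only).
public class CostSketch {
    /** Forward: every rasterized fragment evaluates every light. */
    static long forwardInvocations(long pixels, int lights, double overdraw) {
        return (long) (pixels * overdraw) * lights;
    }

    /**
     * Deferred: cheap G-buffer writes once per rasterized fragment,
     * then one lighting evaluation per screen pixel per light.
     */
    static long deferredInvocations(long pixels, int lights, double overdraw) {
        return (long) (pixels * overdraw) + pixels * (long) lights;
    }

    public static void main(String[] args) {
        long pixels = 1920L * 1080;   // screen resolution
        int lights = 64;              // dynamic lights in view
        double overdraw = 3.0;        // avg fragments rasterized per pixel
        System.out.println(forwardInvocations(pixels, lights, overdraw));   // 398131200
        System.out.println(deferredInvocations(pixels, lights, overdraw));  // 138931200
    }
}
```

Under these assumed numbers deferred does roughly a third of the lighting work, and the gap widens as light count or overdraw grows; the vertex/rasterization cost of drawing the shapes is unchanged.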
Yes, a brief summary (similar to what you wrote above) would be fine with me. As usual, people who want to know all the gory details can read the code and/or look at outside resources.
See, that's why an explanation from someone who has merely understood the topic (me) is a bit risky, but maybe we can work together on something. So let me sum up the shader differences: for the merge pass, you have the benefit that you no longer run over geometries but over the buffer resolution, so you get a lot fewer actual lighting fragment shader invocations and a lot less costly overdraw (since the overdraw already happened at extraction time, with simpler shaders). Also for reference: an actually better implementation (clustered deferred rendering), for which our API is not ready yet, would add the following: I didn't do that yet because we have a pending PR on SSBOs, but also an, imo, even better jme fork from someone who intended to contribute them back as well; and I also had ideas on how to use annotations to create SSBO layouts etc. that we'd need to work on first, same with geometry and other shader types.
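The clustered idea mentioned above can be sketched as follows. This is a toy CPU-side illustration under assumed names (`ClusteredSketch`, `Light`, `buildClusters` are all hypothetical, not jME or this PR's code): divide the screen into tiles, bin each point light into every tile its radius of influence overlaps, and then shade each pixel against only its tile's light list instead of against all lights in the scene. The GPU version needs light lists in SSBOs/UBOs, which is why the API work is mentioned as a prerequisite.

```java
import java.util.ArrayList;
import java.util.List;

// Toy screen-space light binning for clustered/tiled deferred shading
// (illustrative sketch only, not the actual jME implementation).
public class ClusteredSketch {
    static final int W = 64, H = 64, TILE = 16;   // 4x4 grid of 16px tiles

    /** A point light with a screen-space position and radius of influence. */
    record Light(float x, float y, float radius) {}

    /** Bin each light into every tile its circle of influence touches. */
    static List<List<Light>> buildClusters(List<Light> lights) {
        int tx = W / TILE, ty = H / TILE;
        List<List<Light>> clusters = new ArrayList<>();
        for (int i = 0; i < tx * ty; i++) clusters.add(new ArrayList<>());
        for (Light l : lights) {
            // Clamp the light's bounding box to the tile grid.
            int x0 = Math.max(0, (int) ((l.x() - l.radius()) / TILE));
            int x1 = Math.min(tx - 1, (int) ((l.x() + l.radius()) / TILE));
            int y0 = Math.max(0, (int) ((l.y() - l.radius()) / TILE));
            int y1 = Math.min(ty - 1, (int) ((l.y() + l.radius()) / TILE));
            for (int y = y0; y <= y1; y++)
                for (int x = x0; x <= x1; x++)
                    clusters.get(y * tx + x).add(l);
        }
        return clusters;
    }

    public static void main(String[] args) {
        List<Light> lights = List.of(
                new Light(8, 8, 4),       // small light in the top-left tile
                new Light(32, 32, 40));   // big light covering the whole grid
        List<List<Light>> clusters = buildClusters(lights);
        // Top-left tile sees both lights; bottom-right only the big one,
        // so its pixels skip the small light entirely during shading.
        System.out.println(clusters.get(0).size());    // 2
        System.out.println(clusters.get(15).size());   // 1
    }
}
```

A full clustered renderer also slices along view depth (3D clusters rather than 2D tiles), but the binning principle is the same.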
Hi @MeFisto94 Thanks :) |
Hey @scenemax3d, I think there is no collision; basically it was about #1782 being an excellent base for improving this example (though then it's not basic anymore) by using SSBOs/UBOs, iirc.
No description provided.